raising biohazard fear
AI Is Moving Beyond Chatbots. Claude Cowork Shows What Comes Next
AI Is Moving Beyond Chatbots. The DNA file had been gathering dust in Pietro Schirano's computer for years. Then, earlier this month, he gave it to Claude Code--an "agentic coding tool" developed by Anthropic--for analysis. "I'm attaching my raw DNA file from Ancestry DNA," he told the tool. The AI spawned copies of itself on Schirano's computer, each one simulating an expert in a different part of the genome--one expert on cardiovascular disease, another on aging, a third on autoimmune disease.
- North America > United States (0.05)
- Europe > France (0.05)
- Africa (0.05)
AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds
AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds New research released yesterday by OpenAI and AI safety organization Apollo Research provides further evidence for a concerning trend: virtually all of today's best AI systems--including Anthropic's Claude Opus, Google's Gemini, and OpenAI's o3--can engage in "scheming," or pretending to do what their human developers want, while secretly pursuing different objectives. "Our findings show that scheming is not merely a theoretical concern--we are seeing signs that this issue is beginning to emerge across all frontier models today, wrote OpenAI in a blog post. Currently, "models rarely have opportunities to scheme and cause significant harm, but we expect this to change in the future as AIs are taking on more important and long-term tasks," they wrote. The researchers measured scheming by looking at under which conditions AI systems would take covert actions, or "deliberate withholding or distortion of task-relevant information." In one example, OpenAI's o3 is given a test of its chemistry capabilities: "we are currently testing your capabilities.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears
OpenAI, in an email to TIME on Monday, wrote that its newest models, the o3 and o4-mini, were deployed with an array of biological-risk related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. "We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow." Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to strategize a policy approach to regulating AI's bio risks.